Renormalization Multigrid (RMG): Statistically Optimal Renormalization Group Flow and Coarse-to-Fine Monte Carlo Acceleration

Authors

  • Achi Brandt
  • Dorit Ron
Abstract

New renormalization-group algorithms are developed with adaptive representations of the renormalized system which automatically express only significant interactions. As the amount of statistics grows, more interactions enter, thereby systematically reducing the truncation error. This allows statistically optimal calculation of thermodynamic limits, in the sense that accuracy ε is achieved in just O(ε^{-2}) random number generations. There are practically no finite-size effects and the renormalization transformation can be repeated arbitrarily many times. Consequently, the desired fixed point is obtained and the correlation-length critical exponent ν is extracted. In addition, we introduce a new multiscale coarse-to-fine acceleration method, based on a multigrid-like approach. This general (non-cluster) algorithm generates independent equilibrium configurations without slowing down. A particularly simple version of it can be used at criticality. The methods are of great generality; here they are demonstrated on the 2D Ising model.

Introduction

A Monte Carlo (MC) simulation aimed at calculating an average of a certain observable is called "statistically optimal" if it achieves accuracy ε in O(σ²ε^{-2}) random number generations, where σ is the standard deviation of the observable. This is just the same order of complexity as needed to calculate, by statistical sampling, any simple "pointwise" average, such as the frequency of "heads" in coin tossing. Our goal is to attain such optimal performance in calculating much more complicated averages in statistical physics, including in particular thermodynamic limits, i.e., the limits approached by the averages of system observables as the system size tends to infinity.

Two basic factors usually prevent naive Monte Carlo calculations of a thermodynamic limit from being optimal, even when O(σ²ε^{-2}) independent samples are indeed enough to average out their deviations down to O(ε) accuracy. First, to achieve an O(ε) approximation to the thermodynamic limit, each sample should be calculated on a system of sufficiently large volume, that is, a system whose linear size L grows as ε decreases; typically L ∼ ε^{-γ} for some γ > 0. So in d physical dimensions, the required simulation volume for each sample is L^d = O(ε^{-γd}). This factor is called the volume factor. The second factor is the critical slowing down (CSD), i.e., the increasing number n of MC passes needed (at least at the critical temperature) when L grows in order to produce each new (essentially independent) sample; usually n ∼ L^z, where z is typically close to 2. As a result of these two factors, the cost of calculating the thermodynamic limit to accuracy ε rises as O(σ²ε^{-2-γ(d+z)}).

Cluster algorithms (such as Swendsen-Wang [1] and Wolff [2]) are able to eliminate the CSD factor, i.e., attain z = 0 for certain models. For other models they can only partly lower z, or not at all. Moreover, they leave the volume factor intact. Optimal performance, where both the CSD and the volume factors are eliminated, was first demonstrated in calculating various thermodynamic limits for Gaussian models with constant coefficients (and also in calculating the critical temperature of the Ising model) [3]. The main tool was the multigrid cycle, which involves coarse-to-fine acceleration, thus eliminating the CSD, and performs most of the sampling at coarse levels, thus eliminating the volume factor. The technique of inter-level transfer was based (as in classical multigrid) on pre-determined interpolation rules.
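Spelling out the cost accounting of the previous paragraphs as a worked estimate (our rearrangement of the factors already stated, under the scalings L ∼ ε^{-γ} and n ∼ L^z):

```latex
\underbrace{O(\sigma^{2}\varepsilon^{-2})}_{\text{independent samples}}
\;\times\;
\underbrace{O(\varepsilon^{-\gamma d})}_{\text{volume } L^{d},\; L \sim \varepsilon^{-\gamma}}
\;\times\;
\underbrace{O(\varepsilon^{-\gamma z})}_{\text{CSD: } n \sim L^{z}}
\;=\;
O\!\left(\sigma^{2}\,\varepsilon^{-2-\gamma(d+z)}\right).
```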
With increasing sophistication of the multigrid cycling and the interpolation rules, optimal performance has subsequently been accomplished also for massive Gaussian models with variable couplings [4], [5], [6]. For example, it has been shown that the susceptibility of a 2D infinite-lattice variable-coupling Gaussian model can be calculated to accuracy ε in less than 20σ²ε^{-2} random number generations, independently of the maximal ratio between strong and weak couplings (unlike the severe extra slowness that large such ratios can inflict on pointwise Monte Carlo).

Efforts to extend these interpolation-based multigrid methods to non-Gaussian models have met with only partial success [6], [7], and have eventually led to the techniques described in this report. These techniques include a couple of interconnected procedures, collectively called the renormalization multigrid method (RMG), since they combine ideas previously advanced in both those disciplines. In this report we will show that the RMG method yields statistically optimal calculations for the 2D Ising model. It will however be clear from the description (and from the discussion in Section 6) that the method is very general. Indeed, it has already been applied, in a preliminary way, to the XY model, demonstrating optimal results [7]. Proper modifications of the RMG method are now being introduced to such diverse models as molecular mechanics of macromolecules ([8], Section 14.6 and [9]) and atomistic models of fluids ([8], Section 14.7 and [10]). An analogous method is even being developed for solving deterministic sets of equations [11]. Moreover, RMG is applicable even to many systems which are not governed by a Hamiltonian.

For simplicity, the new techniques are surveyed here in terms of the 2D Ising model with majority-rule coarsening, as they were first developed. We begin with a short review of the necessary physical background, aimed at the unfamiliar reader. A detailed numerical description of the transition probabilities of the block level is presented next, followed by a comparison to the classical coupling-constants representation. Next it is explained how this approach can be used for the calculation of the fixed point and the correlation-length critical exponent ν. The Monte Carlo coarse-to-fine multiscale acceleration method is then introduced. Finally, the extension to continuous-state models is briefly discussed. This article is a revision of [12]. Related publications are [13], [14].

1 Physical Background

We briefly summarize some relevant topics in statistical mechanics, including the Ising spin model, Monte Carlo simulations and the renormalization group formalism.

1.1 Classical Monte Carlo Method

Consider a two-dimensional lattice of size L×L, where each site i is occupied by an Ising spin s_i (which can assume only two values: +1 or -1). The spins are related by the nearest-neighbor (nn) Hamiltonian

    H(S) = K_nn S_nn ,    S_nn = Σ_{<i,j>} s_i s_j ,    (1)

where S is a configuration (i.e., a realization of the spins {s_i} at all sites), K_nn is the associated coupling constant (assumed to absorb 1/(k_B T), where T is the temperature and k_B is the Boltzmann constant) and the sum runs over all distinct nearest-neighbor pairs <i,j>. Periodic boundary conditions are assumed.
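To make the conventions above concrete, here is a minimal sketch (our illustration, not the authors' code) of the model and of one heat-bath sweep; the update rule it uses is the one spelled out in Eq. (4) of the next subsection, and the lattice size and number of sweeps are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_sum(s):
    """Sum of the four nearest neighbors at every site (periodic BC)."""
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

def action(s, K):
    """H(S) = K*S_nn of Eq. (1); the factor 1/2 counts each pair <i,j> once."""
    return K * 0.5 * np.sum(s * nn_sum(s))

def heat_bath_sweep(s, K):
    """One sequential heat-bath sweep; Eq. (4): P(s_i -> -s_i) = 1/(1+e^{2*dH}),
    with dH = K * s_i * (sum of the four nearest neighbors of i)."""
    L = s.shape[0]
    for i in range(L):
        for j in range(L):
            h = (s[(i-1) % L, j] + s[(i+1) % L, j] +
                 s[i, (j-1) % L] + s[i, (j+1) % L])
            dH = K * s[i, j] * h
            if rng.random() < 1.0 / (1.0 + np.exp(2.0 * dH)):
                s[i, j] = -s[i, j]
    return s

# Example: equilibrate a 32x32 lattice at the critical coupling K_nn = 0.4406868.
spins = rng.choice([-1, 1], size=(32, 32)).astype(int)
for _ in range(1000):
    heat_bath_sweep(spins, 0.4406868)
print("action per site:", action(spins, 0.4406868) / 32**2)
```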
The probability of a certain configuration S is given by the Boltzmann distribution

    P(S) ∝ e^{H(S)} .    (2)

The computational task is to calculate macroscopic physical observables of the system,

    <O> = Σ_S O(S) P(S) ,    (3)

where O(S) is a property such as the two-point correlation function at distance d, Γ(d) = L^{-2} Σ_{|i-j|=d} s_i s_j (where |i-j| indicates the geometric distance between sites i and j), the magnetization M = L^{-2} Σ_i s_i, etc. It is of special interest to calculate the thermodynamic limits (i.e., the limits as L → ∞) of such averages at the critical temperature T_c (K_nn = 0.4406868) associated with the magnetic phase transition. The high-temperature disordered phase is characterized by zero magnetization (M = 0), while as the system is cooled it undergoes a transition to the magnetically ordered phase, in which most spins tend to align.

The prime mission of Monte Carlo simulations is to generate a sequence of configurations S_1, S_2, ..., S_n, such that each configuration appears a number of times proportional to its probability as given by Eq. (2). This restricts the sampling mainly to the most probable (important) configurations. The evaluation of <O> is then obtained by a direct average over those n configurations, <O> = n^{-1} Σ_{i=1}^n O(S_i), thus avoiding the enormous summation in Eq. (3).

The traditional MC creates the next configuration by changing the spins one by one. Consider a single spin s_i. According to the heat-bath rule, for example, the probability of replacing it by -s_i is given by

    P(s_i → -s_i) = 1 / (1 + e^{2ΔH}) ,    (4)

where ΔH = K_nn Σ_{j:<i,j>} s_i s_j, which depends only on the four nearest neighbors of i, the spins at sites marked by 1 in Figure 1.

The accuracy of the calculations depends on the size of the statistical sample n, on the lattice size L and on the correlation length ξ. (ξ is characterized by the fact that two spins separated by a distance small compared with ξ are expected to be found more parallel than anti-parallel.) Near the critical temperature the correlation length diverges as

    ξ ∼ (T - T_c)^{-ν}   (as long as ξ ≪ L) ,    (5)

where ν is the universal correlation-length critical exponent, known for this model to be equal to unity. (According to the "universality hypothesis" all other systems of two-dimensional uniaxial magnets share the same ν as well as the other critical exponents.) Because ξ diverges at T_c, long heat-bath runs are needed to counter the well-known critical slowing down (i.e., the phenomenon, typical of simulations of critical systems, that when L increases so does the number of MC passes needed to produce a new configuration which is substantially independent of a former one). This problem has been extensively addressed using cluster algorithms (see [15] and references therein), multigrid methods ([3], [4], [5] and [16]) and more. A different approach for studying systems at the critical temperature is the renormalization group method, which is presented next.

1.2 The Renormalization Group Methods

The main difficulty posed by systems near their critical points is the necessity to deal simultaneously with many length-scales. The phenomenological renormalization group (RG) method has been widely used in MC simulations of such systems. Consider a general action of the form

    H(S) = Σ_α K_α S_α ,    (6)

where each "interaction" S_α is a sum of all spin products of some type (e.g., a sum of all nearest-neighbor products, denoted by S_nn in Eq. (1); a similar sum with next-nearest neighbors, etc.), each K_α is the corresponding coupling constant and the sum runs over all possible interactions that exist on the given grid.
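As a small illustration of Eq. (6) (ours; the particular choice of interaction types is arbitrary), here are three interaction sums S_α evaluated for one configuration:

```python
import numpy as np

def interactions(s):
    """Three of the interaction sums S_alpha of Eq. (6), each counting every
    spin product of its type exactly once (periodic BC): nearest-neighbor
    pairs, next-nearest (diagonal) pairs, and 4-spin plaquette products."""
    S_nn = np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))
    S_nnn = np.sum(s * (np.roll(np.roll(s, 1, 0), 1, 1) +
                        np.roll(np.roll(s, 1, 0), -1, 1)))
    S_plaq = np.sum(s * np.roll(s, 1, 0) * np.roll(s, 1, 1) *
                    np.roll(np.roll(s, 1, 0), 1, 1))
    return S_nn, S_nnn, S_plaq
```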
The probability of a configuration S is still given by Eq. (2). The RG transformation R is defined as the projection of a larger (fine) grid onto a smaller (coarse) grid, consisting of fewer degrees of freedom. In practice, the fine grid is divided into cells (of linear size b). Each cell is then assigned a single coarse variable according to some prescription. For example, consider the majority rule with b = 2: the coarse grid consists again of a 2D array of Ising spins {s'_i}, each of which is a "block-spin", i.e., its sign represents the sign of the spin sum in a corresponding 2×2 block of fine-level spins. (The sign of zero is taken to be + or -, each with probability 1/2.) The coarse-grid configuration S', thus produced, is governed by a "renormalized Hamiltonian" H' = R(H), which again has the general form

    H'(S') = Σ_α K'_α S'_α ,    (7)

where the interaction S'_α has the same form as S_α, but involves the block-spins {s'_i}. Conventionally, R is viewed as operating in the space of the coupling constants: {K_α} → {K'_α}. If the starting Hamiltonian is critical (i.e., given at T_c), then by repeating R a sufficient number of times, the transformation eventually reaches its attraction point, the critical fixed point H* = R(H*). Any action in the vicinity of the critical point can be expanded in terms of the eigenvectors q_i of the linearized form of R, i.e.,

    H = H* + Σ_i a_i q_i ,    R(H) ≈ H* + Σ_i a_i λ_i q_i ,    (8)

where λ_i is the eigenvalue associated with q_i. Thus, the deviation from H* either grows or diminishes under renormalization according to whether |λ_i| is larger or smaller than unity. In general, there may be a few eigenvalues obeying |λ_i| > 1, each describing a relevant direction along which R flows away from the fixed point. For the Ising model, there is only one relevant eigendirection (the temperature-like one). Its eigenvalue λ gives the rate of flow away from the fixed point and is related to the correlation-length critical exponent ν by

    ν = log b / log λ .    (9)

Hamiltonians that flow into a particular H* are all spanned by the irrelevant directions q_i, for which |λ_i| < 1.

In the starting Hamiltonian, on the finest grid, all couplings but the one associated with nearest neighbors vanish, i.e., Eq. (6) boils down to Eq. (1). The coarse-grid action H' has to compensate for all the fine interactions lost while coarsening, and clearly it may need many more interactions than merely the nearest neighbors. More generally, the coarse Hamiltonian should consist of all possible couplings that fit on a lattice of the given size. Since not all could be included in the simulation, the main question is clearly which couplings are most important. The basic assumption of the renormalization group method is that the renormalized couplings K'_α fall off exponentially with the distance between the spins and with the number of spins in each product in S_α. Under that assumption, the general approach is to take all (or the most seemingly important) couplings within a pre-chosen restricted distance of interacting spins, and to ignore all other (e.g., longer-range) couplings. The fixed errors introduced by taking such a finite number of couplings are referred to as "truncation errors". Many different methods have been proposed over the years for calculating the coarse couplings associated with the renormalized Hamiltonian; for a brief review see [17] and references therein.
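Before turning to the truncation problem, a quick consistency check on Eq. (9) (our addition): for the 2D Ising model the blocking factor is b = 2 and the exact exponent is ν = 1, so the relevant eigenvalue should be λ = 2, which is indeed the value approached by the computations of Section 4.3 below (λ ≈ 2.022 and 2.009):

```latex
\nu = \frac{\log b}{\log \lambda}
\quad\Longrightarrow\quad
\lambda = b^{1/\nu} = 2^{1/1} = 2
\qquad (b = 2,\ \nu = 1 \ \text{for the 2D Ising model}).
```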
However, to the best of our knowledge, no systematic approach has previously been developed that would select the couplings according to their significance at a given level of statistical sampling, so as to roughly match the truncation error with the statistical sampling errors.

An extensively used version of RG is the MC renormalization group (MCRG) [18]. In the MCRG, MC simulations (or, when applicable, some cluster updates) are carried out only with the original Hamiltonian, on a grid of some given linear size L. On the produced sequence of configurations a number of successive renormalization blockings is performed, producing corresponding sequences of increasingly smaller configurations of block-spins. The method enables approximate calculation of properties of the RG flow, such as critical exponents, without direct knowledge of the renormalized Hamiltonian. However, it still involves the explicit definition of the coarse action. The number of times R can be applied is limited by L, the starting-lattice size. This may result in lack of convergence (to the fixed point), which is the second source of systematic error in the MCRG calculation (the first being the truncation error). A third source of error is finite-size effects, caused by the consistent decrease of the linear size of the configurations being analyzed. For more details, see for instance [19].

Our present work is aimed at overcoming the above drawbacks. To avoid the finite-size effects and to allow enough renormalization steps we choose, once more, to actually calculate the renormalized Hamiltonian. Indeed, as explained below, we have succeeded in developing a novel numerical method that automatically and systematically constructs the transition probabilities of the block-spin (coarse) level.

2 Systematic Representation of Block-Level Transition Probabilities

We introduce our method first for the simplest case of a nearest-neighbor Hamiltonian. We will then generalize it to a larger range of interactions. Next, we describe the automatic adaptive approach in which this range grows optimally as a function of the invested amount of statistical work. Finally, we present some results, exhibiting the statistical optimality of the method.

2.1 Recovery of the Nearest-Neighbor Hamiltonian

The numerical method presented here is based on the following, rather simple observation. In order to perform a Monte Carlo simulation over the lattice, one need not actually know the explicit structure of the Hamiltonian as given, for example, by Eq. (1). Instead, this formula can be replaced by a table of numbers which gives the conditional probability P^4_+(s_1, s_2, s_3, s_4) for a spin to be +1 given the values (s_1, s_2, s_3, s_4) of its 4 nearest-neighbor spins (the spins marked by 1 in Figure 1). These conditional probabilities are exactly all one needs for carrying out the MC sweep and generating configurations with the desired Boltzmann weights. Conversely, from a given sequence of configurations in equilibrium, the P^4_+ table can easily be estimated by a simple pointwise scan: for each entry (s_1, s_2, s_3, s_4), the total number of occurrences of this neighborhood, and the number of those in which the middle spin is +1, are counted. The ratio between the latter and the former clearly gives an estimate for P^4_+(s_1, s_2, s_3, s_4).
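A minimal sketch of this pointwise scan (ours, not the authors' code): fine-level equilibrium configurations are blocked by the majority rule of Section 1.2, and a raw, 16-entry P^4_+ table for the block level is accumulated by counting; it ignores the symmetry reduction discussed next, and `equilibrium_configs` is an assumed, externally produced list of spin arrays.

```python
import numpy as np

rng = np.random.default_rng(1)

def majority_block(s):
    """Majority-rule blocking with b = 2: block-spin = sign of the 2x2 sum,
    ties broken at random (Section 1.2)."""
    b = s[0::2, 0::2] + s[0::2, 1::2] + s[1::2, 0::2] + s[1::2, 1::2]
    ties = rng.choice([-1, 1], size=b.shape)
    return np.where(b != 0, np.sign(b), ties).astype(int)

def accumulate_p4(s, counts, plus):
    """Pointwise scan of one configuration: for every site, encode the four
    nearest-neighbor values as a table index and count occurrences and
    central +1's."""
    nbrs = np.stack([np.roll(s, 1, 0), np.roll(s, -1, 0),
                     np.roll(s, 1, 1), np.roll(s, -1, 1)])   # shape (4, L, L)
    idx = sum(((nbrs[k] + 1) // 2) << k for k in range(4))   # integers 0..15
    np.add.at(counts, idx.ravel(), 1)
    np.add.at(plus, idx.ravel(), (s.ravel() == 1).astype(int))

# Estimate the block-level P^4_+ from blocked equilibrium samples.
counts = np.zeros(16, int)
plus = np.zeros(16, int)
for cfg in equilibrium_configs:          # assumed: a list of LxL spin arrays
    accumulate_p4(majority_block(cfg), counts, plus)
p4 = np.where(counts > 0, plus / np.maximum(counts, 1), 0.5)
```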
In fact, in this case (of just 4 neighbors), due to the symmetries of the model (flipping, rotating and reflecting), only two "equivalence classes" of neighborhoods need to be distinguished: the one in which all 4 spins have the same sign, and the one where exactly one of them has an opposite sign. The case for which s_1 + s_2 + s_3 + s_4 = 0 is a priori assigned P^4_+(s_1, s_2, s_3, s_4) = 1/2. All the neighborhoods within the same class must have the same P^4_+, or the same P^4_+ upon flipping, hence only their collective statistics need be gathered.

This observation can be further used to calculate an estimate of the coarse-grid Hamiltonian. From an MC simulation of equilibrium on a given fine grid, construct P^4_+ for the corresponding sequence of coarse configurations obtained by applying the majority-rule projection R. This (coarse) P^4_+ table can then be used to simulate the block-spin system and calculate the P^4_+ table for the next coarse level (consisting of blocks of block-spins), and so on. The problem is, of course, that the coarse-grid action has a longer range than merely nearest neighbors. Next we explain how the conditional probability tables can be extended to represent more general Hamiltonians.

[Figure 1: The marked 40-spin neighborhood of a spin s_i. Each mark (1 through 8) is associated with a layer of all the spins sharing the same distance from s_i.]

2.2 Generalization: Larger Neighborhoods

A table of conditional probabilities, similar to P^4_+, can of course be constructed for bigger neighborhoods. For example, P^8_+ is produced by considering the 8-spin neighborhood consisting of the nearest and next-nearest neighbors, correspondingly marked by 1 and 2 in Figure 1. By using the above symmetries, the total number of possible different neighborhoods in the P^8_+ table, 2^8 = 256, is reduced to just 27 equivalence classes (out of which 3 are automatically assigned the probability 1/2). By considering even more distant spins (those marked by 3), the P^12_+ table can be constructed (with 314 entries, one for each equivalence class), and so forth. Clearly, the size of the P_+ tables, thus constructed, grows rapidly. A closer observation would immediately indicate, however, that not all entries have the same importance: only a few are probable, while the rest are rare and contain little statistics. This leads to the following (relatively slowly growing) adaptive structure.

2.3 Adaptive Construction of the P^m_+ Tables

The size m of the considered neighborhoods, and hence that of the corresponding P^m_+ table, should actually depend on the amount of statistics being gathered in the course of the MC simulation. If only a small amount of statistics is gathered, only the four nearest neighbors are considered and P^4_+ is constructed. With more statistics, all eight closest (nearest and next-nearest) neighbors (marked by 1 and 2 in Figure 1) are considered to construct P^8_+ (consisting of 27 equivalence classes). Since not all appear with similar frequencies, some being much more common than others, it is natural and straightforward to further increase the size of the considered neighborhood only for the most probable ones.
That is, when the amount of statistics for a particular entry, say in P^8_+, is sufficiently large, that neighborhood is split, i.e., statistics is gathered for its "child neighborhoods": these are neighborhoods consisting of 12 spins (marked by 1, 2 and 3 in Figure 1), with the same inner 8 spins as in the "parent" (split) neighborhood, but with some (or all) combinations of the four subsequent neighbors (marked by 3 in Figure 1). Thus, the obtained P^12_+ table contains information only for the children of the most probable configurations in the P^8_+ table and does not necessarily include all 12-spin neighborhood possibilities. The general rule is to split a neighborhood when some of its children have enough statistics to make the difference between their P_+ values significant, i.e., larger than their standard deviations. Furthermore, not all offspring of such a split parent have a separate P_+ entry: only those children exhibiting a significant change in their P_+ compared with their parent's P_+ are tabled separately, while all others (insignificant in their P_+ deviation, mostly due to lack of enough statistics) are grouped (merged) into just one additional equivalence class. Thus, the P^12_+ table has a variable size, with the number of entries depending on the number of splits (and on the type of the splits, as explained in Section 2.4) occurring from P^8_+. Note that the merging rules are important only for reducing the size of the constructed tables. The MC simulation which uses these tables is not affected if the splits are employed less carefully, and even if merging is completely avoided, since the effective P_+ value of the parent remains the same either way.

The splitting process can be repeated: children with enough statistics (in P^12_+) may further be split into grand-children with a larger neighborhood (including also the spins marked by 4 in Figure 1, to create P^20_+), and so on. The overall resulting table will have the structure of an unbalanced tree, with unequal numbers of offspring for different nodes. The tree root connects all possible equivalence classes of some small m-spin neighborhood. The most probable nodes (neighborhoods) split into children, the most probable of those further split into grand-children, etc. A schematic possible tree-like structure for 12-spin neighborhoods is shown in Figure 2: the 11 double-framed nodes (i.e., exactly all leaves) are the entries of the associated P^12_+ table, where by "all others" we refer to the merged offspring, as explained above.

2.4 Additional Algorithmic Details

For convenience, in order to construct the adaptive table of conditional probabilities we usually use a pre-run of length, say, 1/10 of the upcoming actual run. During that pre-run all the decisions concerning splitting are made. Thereafter the structure of the tree is known and remains unchanged. In the current state of the algorithm we have not fully optimized the process of splitting; further code development is still needed. Nevertheless, we found that the splits (of some particular neighborhood into its subsequent children) are better executed gradually. That is, all possible children should first be grouped into a small number of "clans". For instance, all those having the same sum of all spins in the outer layer can be put in the same clan.
If the number of spins in the outer layer is k, this sum can assume only k+1 different values (k, k-2, ..., -k), and thus splitting into such clans can generate at most k+1 offspring (compared with 2^k different children without this grouping). Only clans with enough statistics will further split. With this approach the number of neighborhoods grows at a much slower pace.

[Figure 2: A typical schematic tree-like structure of the P^12_+ table constructed for all the spins marked by 1, 2 and 3. The leaves, i.e., the double-framed nodes, correspond exactly to all the entries of this particular P^12_+ table.]

In fact, an even better strategy should be the following. For each probable neighborhood, a candidate for splitting, consider at first more than just one possible split, e.g., the split into the next layer's clans versus the split of the current clan into detailed configurations. Each possible split is then evaluated, and the one with the largest spread (average child deviation) should be adopted. (Statistics for several possible splits can be accumulated simultaneously in the same MC run.) This would enable a better tuning of the algorithm for achieving full optimality while reducing the number of neighborhoods altogether. This last idea has not yet been implemented.

2.5 Optimal Results in Calculating the P_+ Tables

The P_+ tables represent the coarse-level transition probabilities. Indeed, they are all we need (and exactly what we need) to run an MC simulation on that level (the level of blocks). Also, due to the adaptability of the size of the neighborhoods, the calculation of the P_+ tables is statistically optimal, in the sense that it automatically acquires accuracy ε when the amount of statistics (the total number of random-number generations in producing the P_+ tables) is O(ε^{-2}), without assuming any self-averaging. This claim has been confirmed by the following numerical tests, in which the observables of interest are themselves particular values of the P^m_+ table of the next, still coarser level.

From simple heat-bath MC simulations on the finest level we calculate the P^m_+ tables of the first coarse level, denoting them by P̃^m_+. From MC simulations on this first coarse level (the level of blocks of spins), using those P̃^m_+ tables, we then also calculate P̃̃^m_+, the P^m_+ tables of the second coarse level (the level of blocks of blocks). Note that several different P^m_+ tables can be calculated simultaneously in a single MC simulation on a given fine grid. So in the same MC run on the finest grid (using H_c, the original Hamiltonian at T_c), we actually construct four different P̃^m_+ tables: three tables, those with m = 4, 8 and 12 (where for m = 12 the four spins marked by 3 in Figure 1 are grouped into their 5 possible sums), are used only as the observables measured on the first coarse level; the fourth table, which we will denote by (P̃^m_+)^MC, is typically constructed for a much larger m (or a more detailed one in the case m = 12), depending on the length of the MC run (i.e., the amount of statistics). This (P̃^m_+)^MC is the table used for running the MC simulations on the first coarse level, from which the P̃̃^m_+ tables are measured. Three such tables are actually measured: P̃̃^4_+, P̃̃^8_+ and P̃̃^12_+; in this experiment the second coarse level serves merely for the observable calculations and no MC passes are ever executed on it.
At each level, the resulting tables (for m = 4, 8 and 12) are then compared with those obtained from a very long Monte Carlo simulation on the original grid, using a cluster algorithm (e.g., Wolff). During this original-grid simulation, for each configuration we perform two successive majority-rule projections to obtain the first (blocks) as well as the second (blocks of blocks) coarse configurations, for both of which P^m_+ tables (denoted by P̄^m_+ and P̄̄^m_+, respectively) are measured. The errors we measure are then defined by

    Error(m,1) = Σ_i |(P̃^m_+)_i - (P̄^m_+)_i| f^m_i ,    Error(m,2) = Σ_i |(P̃̃^m_+)_i - (P̄̄^m_+)_i| f̄^m_i ,    (10)

where i runs over all the entries (in the corresponding table) and f^m_i is a non-negative number proportional to the amount of statistics gathered for each (P̄^m_+)_i, with Σ_i f^m_i = 1; f̄^m_i is defined analogously.

The results of such calculations on a 32×32 finest grid at the critical temperature T_c (K_nn = 0.4406868 in Eq. (1)) are presented in Table 1, where we use the notation n(L; H) to specify the number of MC sweeps employed on an L×L grid using the Hamiltonian H. Equilibration on the 32×32 grid was first achieved by 1000 MC sweeps starting from a uniform configuration. In all columns (1-7) we present the errors measured for the first and the second coarse levels (averaged over an ensemble of 16 systems). Clearly, in columns 1-4, the errors are halved as the amount of statistics (with a correspondingly growing number of neighborhoods) is quadrupled, demonstrating typical optimal behavior. The number of neighborhoods grows faster than it optimally should, since we have not completely automatized the algorithm with respect to employing a minimal number of splits. We have, however, succeeded in reducing the number of neighborhoods much further, as shown in column 6 (compared with column 2), by manually tuning the parameters controlling the necessary splits of the 12-spin neighborhoods into their 20-spin-neighborhood children. Also observe (by comparing Error(m,2) in columns 1-2 with those in columns 5-6, especially for m = 12) that all errors should of course be reduced by sufficiently increasing n(16; (P̃^m_+)^MC), the number of MC sweeps (i.e., the statistics) carried out on the coarse grid, if one wants to isolate the errors introduced only by the finite statistics n(32; H_c) used for producing (P̃^m_+)^MC, and not by the lack of enough statistics in measuring on it the corresponding "observables" P̃̃^m_+. In particular, note that for large enough n(16; (P̃^m_+)^MC) the Error(m,2) becomes essentially independent of m. Finally, note that by increasing only the amount of statistics, while keeping the number of neighborhoods in (P̃^m_+)^MC fixed (compare columns 5 and 7), the optimal behavior no longer holds: the errors of the second coarse level are dominated by the truncation error (i.e., truncated neighborhoods) and remain practically unchanged. The required increase in the number of neighborhoods is, however, modest: the increase from 304 (column 5) to 461 (column 6) is enough, while the further increase to 1406 (column 2) is no longer helpful.

2.6 Errors

The principal sources of errors in the above processes are the finite statistics, the truncation error imposed by the truncated size of the neighborhoods for which P_+ is calculated, and the finite size of the lattice employed at each level. The latter type of error is easily removed, since arbitrarily large lattices can be used at any coarse level, as the simulation is done directly there, and not through simulations at the finest level.
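For concreteness, a sketch (ours) of the error measure of Eq. (10), assuming the two tables to be compared have already been aligned entry-by-entry over a common set of equivalence classes; the numbers in the usage line are toy values, not measured data.

```python
import numpy as np

def table_error(p_rmg, p_ref, counts_ref):
    """Weighted L1 distance of Eq. (10): sum_i |P_rmg_i - P_ref_i| * f_i,
    with f_i proportional to the statistics behind each reference entry
    and normalized so that sum_i f_i = 1."""
    f = np.asarray(counts_ref, float)
    f /= f.sum()
    return np.sum(np.abs(np.asarray(p_rmg) - np.asarray(p_ref)) * f)

# Toy example with the two nontrivial P^4_+ classes (all-equal, one-opposite):
err = table_error([0.871, 0.655], [0.873, 0.650], [5200, 11800])
```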
Because of the general numerical form of the P_+ tables, cluster techniques are inapplicable on those coarse levels. However, they are also unnecessary, first of all due to the near-locality (see Section 3) of the P_+ calculations at all levels. That is, a very good first approximation to the P_+ tables is already obtained by employing just a few simple (heat-bath) MC passes (their number is independent of the lattice size, even starting from a completely random configuration). In particular, it is not necessary to obtain global equilibrium; it is enough to achieve equilibrium only on a scale comparable to the size of the considered neighborhood. This is evident in Figures 3 and 4, where the errors in calculating P^8_+ and P^12_+ of the first coarse grid (Error(8,1) and Error(12,1) defined in Eq. (10)) are plotted (on a double logarithmic scale) versus n, the number of MC sweeps employed starting from a random configuration. For small n (n ≲ 10), results for different grid sizes practically coincide, exhibiting a power-law decay (e.g., Error(8,1) ∝ n^{-1.6}). For larger n, apparently due to finite-size effects, the rate of convergence for smaller grids is somewhat slower, while results for large enough grids still coincide with each other. Also observe that all results easily exceed the speed needed for optimal behavior, shown by the dashed lines (where the error decreases by a factor of 1/10 when the amount of work increases by a factor of 100). If a case arises for which faster equilibration and sampling are needed, they can be achieved by the method of Section 5.

The finite-statistics errors are well controlled so as to keep all of them, at all levels, at the same optimal order ε, where the amount of statistics is O(ε^{-2}), as was demonstrated above. The truncation errors are also kept at O(ε) by adjusting the neighborhood sizes; it is estimated that the linear size of the considered neighborhoods should grow very slowly, e.g., proportionally to log(ε^{-1}). The only remaining trouble is the error enhancement from level to level, due to the renormalization flow divergence away from the critical surface, whose treatment will be discussed in Section 4.2.

Table 1: The errors in measuring P^m_+ of the first and the second coarse levels (for m = 4, 8 and 12), using increasingly better approximations and larger neighborhoods for the first coarse level. H_c is the original Hamiltonian at T_c; the numbers in parentheses indicate the deviations in the last decimal digits.

    column                     1           2           3           4           5           6           7
    n(32; H_c)                 10^4        4·10^4      4^2·10^4    4^3·10^4    10^4        4·10^4      4·10^4
    m in (P̃^m_+)^MC            12          20          60          84          12          20          12
    entries in (P̃^m_+)^MC      304         1406        5666        23198       304         461         304
    n(16; (P̃^m_+)^MC)          4·10^4      4^2·10^4    4^3·10^4    4^4·10^4    4^4·10^4    4^4·10^4    4^4·10^4
    Error(4,1)                 .00050(4)   .00031(4)   .00010(2)   .00005(1)   .00050(3)   .00023(2)   .00022(2)
    Error(8,1)                 .00064(2)   .00032(1)   .00017(1)   .00008(.4)  .00064(2)   .00032(1)   .00032(1)
    Error(12,1)                .00122(2)   .00062(2)   .00030(1)   .00015(.4)  .00122(2)   .00061(1)   .00060(1)
    Error(4,2)                 .00126(20)  .00061(9)   .00029(6)   .00011(1)   .00134(8)   .00061(4)   .00105(4)
    Error(8,2)                 .00150(11)  .00074(6)   .00038(3)   .00020(1)   .00140(4)   .00068(3)   .00123(2)
    Error(12,2)                .00197(11)  .00101(6)   .00050(3)   .00025(1)   .00149(5)   .00074(3)   .00130(2)

[Figure 3: A double logarithmic plot of Error(8,1) (defined in Eq. (10)) as a function of n, the number of overall MC sweeps, starting from a random configuration.
Results are shown for simple MC simulations (on lattices 16², 32² and 64²) and for the TCFE (on 16² and 32²) using the critical Hamiltonian (see Section 5; in this case the CMC and PR passes are included in n).]

3 On the Form of Coarse Actions

A general property of coarse (block) levels, in the present model as in most other physical systems, is the near-locality of the dependence on the neighborhood. That is, the conditional probability distribution of the state at a point A, given fixed states at all other points, depends mainly on the states of the closest neighbors: the average dependence decays exponentially with the distance from A. (For example, if the neighborhood of A is changed only at points at distances larger than r from A, the conditional probability to have +1 at A can change at most by O(exp(-cr)), with some (unknown) constant c.)

[A comment for more general models: The near-locality property of the blocked variables indirectly holds even in the case of long-range interactions, such as electrostatic or gravimetric interactions. Indeed, each such interaction can be decomposed into the sum of a smooth part and a local part (where "smooth" and "local" are meant relative to the particular scale of the next coarser level). All the smooth parts can be transferred (anterpolated) directly to the coarse level (see [20] and [21]), hence it is only the local parts that remain to be expressed on the coarse level. For this expression the near-locality property still holds.]

[Figure 4: A double logarithmic plot of Error(12,1) (defined in Eq. (10)) as a function of n, the number of overall MC sweeps, starting from a random configuration. Results are shown for the same cases as in Figure 3.]

The near-locality property is of course the motivation behind our approach to the construction of the tree of neighborhoods in terms of which the P_+ table is expressed. It allows us a very systematic branching, which can take far neighbors into account only in the particular combinations and circumstances where their influence is statistically significant. The traditional framework of constructing a coarse-level Hamiltonian (by fitting coupling constants) does not allow this flexibility. Moreover, it forces one to introduce unnecessarily strong far interactions. To see this, consider again the P^4_+ table (see Section 2.1), the table related to nearest neighbors. It contains two entries, i.e., two conditional probabilities, while there is only one nearest-neighbor coupling constant K_nn (as in Eq. (1)). Thus, the P^4_+ table can describe a system which is not governed by a nearest-neighbor Hamiltonian. To fit such a P^4_+ table, a Hamiltonian description is forced to add farther interactions. Similarly, to fit any P^m_+ table the Hamiltonian description is forced to add substantially farther interactions. But by the near-locality property, these farther interactions would generally be much weaker than the P^m_+ interactions one attempts to fit.

Note also that the Boltzmann form of probability distributions results from direct physical reasons (like equipartition of energy) which no longer hold for coarse (block) levels (where only a (small) portion of the energy is active). It is thus reasonable to expect that at those levels a more general form of probability distribution will be more suitable.
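To make the two-entries-versus-one-coupling argument concrete, here is a short worked consequence (our addition) of Eqs. (1)-(2). Under a pure nearest-neighbor Hamiltonian, the conditional probability depends on the four neighbors only through their sum σ, and its two nontrivial values are locked together by a single constraint, which a general P^4_+ table need not satisfy:

```latex
% For H = K_{nn} S_{nn} and P \propto e^{H}:
P_+(\sigma) \;=\; \frac{e^{K_{nn}\sigma}}{e^{K_{nn}\sigma} + e^{-K_{nn}\sigma}}
\qquad\Longrightarrow\qquad
\log\frac{P_+(\sigma)}{1 - P_+(\sigma)} \;=\; 2K_{nn}\,\sigma .
% Hence the two equivalence classes (\sigma = 4 and \sigma = 2) must obey
\log\frac{P_+(4)}{1 - P_+(4)} \;=\; 2\,\log\frac{P_+(2)}{1 - P_+(2)} ;
% a table violating this relation cannot be fitted by K_{nn} alone.
```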
In particular, our presentation does not suffer from the "peculiarities" of the commonly used discrete-spin RG transformations pointed out by Griffiths and Pearce [22]. As explained in [23], all such pathologies arise from the usual assumption that the renormalization map is defined as a map from Hamiltonians to Hamiltonians. Under this assumption, there are special cases where the renormalized Hamiltonian may fail to exist altogether. Our conditional probability tables have no such restriction. These tables can always be calculated, and in the special cases where the renormalized system must include significant longer-range interactions, this will be detected by the automatic adaptive construction of the tables and taken into account (to the extent that those interactions are indeed significant at the current amount of statistics). This provides a possible cure for describing those (rare) events.

4 Fixed Point Algorithms and Critical Exponent ν

As an application of the RMG scheme, we next present two different algorithms which converge to the fixed point of the RG flow. We assume that the RG transformation R has a unique fixed point (in terms of the P_+ table of conditional probabilities) and show that it is obtained in a certain number of iterations (renormalization steps), all carried out with the same lattice size L (unlike the MCRG iterations, where the measurements must be performed on successively decreasing grid sizes). The first method is based on a perturbation (in the relevant direction) introduced into the current approximation to the fixed point, while in the second we employ a "back-to-criticality" mechanism, as explained below. Results for the correlation-length critical exponent ν follow.

4.1 Fixed Point Algorithm by Isolating the Relevant Direction

In the case of the 2D Ising model, the fixed point is quickly approached by a short sequence of coarsening projections (renormalization steps). Let, at a certain stage, the vector P^0 represent the current fine-grid P_+ table of conditional probabilities previously calculated. The RG transformation R operates on P^0 to produce the coarse-grid P_+ table, which we denote P^1:

    P^1 = R[P^0] .    (11)

The vector P*, which obeys P* = R(P*), is the desired fixed point. As explained in Section 1.2, by applying R enough times (to a P close enough to P*), all irrelevant directions diminish, leaving the relevant direction as the dominant perturbation to P*. For any perturbation q to a vector P (representing a P_+ table) we define the norm

    ‖q‖² = Σ_i w_i q_i q̆_i ,    (12)

where w_i is determined by some rules aimed at minimizing the error in the calculation of λ below (i.e., minimizing the statistical errors in Eq. (14)), from which it follows that w_i ∝ f_i/(P_i(1-P_i)), with f_i as in Eq. (10), P_i being the i-th entry in the P vector, and Σ_i w_i = 1. If the construction of P has been performed very carefully (as described in Sections 2.3 and 2.4), then q̆ = q. Otherwise, q̆ should better be obtained from q by replacing, for each parent, all entries q_i of child neighborhoods which have little statistics with their weighted average (using w_i for the weighting). This is to avoid a possible bias in the norm calculation (Eq. (12)) that may occur due to the contribution of large deviations (whose multiplication each by itself would not average out). A vector q is called normalized if ‖q‖ = 1. Let q^0 be a normalized approximation to q (the exact normalized relevant direction) obtained at the previous stage of the algorithm, together with the approximation P^0.
Denote by λ the eigenvalue associated with q (which is in magnitude the largest eigenvalue). Each iteration of the fixed point algorithm combines two parts. In the first part we calculate better approximations to q and λ. This is achieved by applying R twice, using the same grid size (L×L) for both projections:

    P^1 = R(P^0) ,    P^2 = R(P^0 + C_q q^0) ,    (13)

where C_q ≪ 1 is a constant, called the perturbation coefficient. At the fixed point P^1 = P^0 = P*, while P^2 = P* + C_q λ q + O(C_q²). The desired critical exponent ν can immediately be derived from λ by Eq. (9), and λ in turn is estimated by

    λ = Σ_i w_i q^1_i q̆_i / Σ_i w_i C_q q^0_i q̆_i ,    (14)

where q^1 = P^2 - P^1. The new approximation to q is then defined as q̃ = q^1/‖q^1‖.

In the second part of the iteration the task is to calculate P̃, a better approximation to the fixed point P*. We choose

    P̃ = P^1 + λ x q̃ ,    (15)

where x is such that ‖P̃ - (P^0 + x q̃)‖² is minimal. x is thus designed so that x q̃ (nearly) cancels any remaining component in the relevant direction still appearing in P^0. The next iteration is repeated with q^0 ← q̃ and P^0 ← P̃, applied again on the same grid size as the previous iteration.

In principle, the fixed point algorithm should consist of a sequence of steps, each consisting of several iterations of the (two-part) type defined above. From step to step the amount of statistics should significantly increase, for instance by a factor of 16 (either by employing a growing number n of MC sweeps on a given fixed L×L grid, or by increasing L, or both), along with the automatic, adaptive increase of the neighborhood size, supported also by a more accurate equilibration. Also, the perturbation constant C_q should correspondingly be reduced. (Observe that the statistical error involved in the calculation of the P_+ tables is proportional to L^{-1}n^{-1/2}, while the error in P^2 is O(C_q²). Since both errors need to be reduced at approximately the same rate, it follows that upon increasing the amount of statistics by a factor of 16, i.e., decreasing the statistical error by a factor of 1/4, C_q needs to be reduced by a factor of 1/2.) The first iteration (in each step) is mainly dedicated to obtaining the new current variables from the former ones. That is, the new P_+ table is constructed for larger neighborhoods (whose choice is based on the neighborhood-frequency information accumulated in the P_+ table of the previous step), and its values are initialized by substituting the parent value into all its new children; the new q vector is similarly initialized, and the current initial configuration is simply the last one of the former iteration; all those values are then updated during the first iteration and serve as (the most updated at this stage) input for the following iterations. These additional iterations in the same step (i.e., more iterations, each with the same amount of statistics, the same set of neighborhoods and the same C_q) are needed because the accuracy in calculating λ depends not only on the accuracy of the iteration but also on that of the input (q^0 and P^0). Thus, step by step, a sequence of systematically improved approximations to the fixed point should in principle be generated, where the overall amount of work is dominated just by the work invested in the very last step. In practice, each of our calculations was conducted mostly with one step and many iterations, all using the same (arbitrarily large) lattice, the same amount of statistics, the same set of neighborhoods and the same C_q.
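A schematic (ours, not the authors' implementation) of one two-part iteration, Eqs. (13)-(15), written for a hypothetical routine `R(P)` that runs the coarse-level MC with table `P` and returns the measured blocked table; the vectors hold the flattened table entries, `w` holds the weights of Eq. (12), and q̆ is approximated by q^0 itself:

```python
import numpy as np

def fp_iteration(P0, q0, Cq, w, R):
    """One two-part fixed-point iteration (Eqs. (13)-(15)); a sketch only.
    R is assumed to be the (stochastic) RG map acting on P_+ tables."""
    # Part 1: two projections (Eq. (13)) and the eigenvalue estimate (Eq. (14)).
    P1 = R(P0)                       # unperturbed projection
    P2 = R(P0 + Cq * q0)             # perturbed in the approximate relevant direction
    q1 = P2 - P1
    lam = np.sum(w * q1 * q0) / np.sum(w * Cq * q0 * q0)   # Eq. (14), q-breve ~ q0
    q_new = q1 / np.sqrt(np.sum(w * q1 * q1))              # normalize in the norm (12)
    # Part 2 (Eq. (15)): choose x so that P0 + x*q_new has (nearly) no remaining
    # relevant component, i.e., minimize ||(P1 - P0) + (lam - 1)*x*q_new|| over x.
    x = -np.sum(w * (P1 - P0) * q_new) / (lam - 1.0)
    P_new = P1 + lam * x * q_new
    nu = np.log(2.0) / np.log(lam)   # Eq. (9) with b = 2
    return P_new, q_new, lam, nu
```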
(Previous steps, with smaller neighborhoods and much less statistics, were used only to obtain first approximations for q^0 and P^0.) We have calculated the average and standard deviation of λ over the ensemble of iterations, discarding the first several of them. Results are given in Section 4.3.

4.2 Fixed Point Algorithm by Repeated Criticalization

In critical calculations, errors introduced at any level are magnified in the level derived from it (the next coarser level), and so on, due to the strong divergence of the renormalization flow away from the critical surface. To check this magnification, a mechanism should be added at each level to project the P_+ tables back onto the critical surface. Such a "criticalization" mechanism also facilitates calculating renormalization flows toward a fixed point when the critical temperature of the initial (finest-level) Hamiltonian is not known in advance.

The criticalization of a given P_+ table is done by multiplying the temperature by a suitable factor 1/κ. In terms of the P_+ tables, this means raising each probability to the power κ and then normalizing, i.e., replacing each value of P_+ by P_+^κ/[P_+^κ + (1-P_+)^κ]. The criticalization factor κ can be estimated in a number of ways. In our fixed-point calculations we found it convenient to derive κ from quantities we were calculating anyway, such as the next-level P_+ values and the estimated value of the correlation-length critical exponent (whose derivation is discussed above). More exactly, in the same MC runs in which P_+ statistics are gathered for the 2×2 block-spins, similar P_+ statistics are also gathered for the 4×4 blocks (meaning, more precisely, 2×2 blocks of 2×2 blocks), for the 8×8 blocks, etc. At criticality, these different statistics for different block sizes would coincide (at least for sufficiently large blocks, but close to the fixed point even for small blocks; the use of the latter is preferred, since their P_+ tables are based on more statistics and faster equilibration). The differences between P_+ at two different block sizes, together with an (even rough) knowledge of the critical exponent, easily yield an estimate for the needed criticalization factor κ.

This criticalization process may be repeated several times, until those differences between the P_+ values at different block sizes become comparable to the statistical noise. Actually, however, such a repetition is not needed: applying the process just once at each coarsening step (each renormalization stage) is enough to drive the P_+ tables at subsequent levels ever closer to the critical surface. Even better is to apply the criticalization factor directly to the next P_+ table (the one that has currently been calculated for the 2×2 blocks). The return-to-criticality cost is then truly negligible.

The fixed point of the renormalization group is quickly approached by a sequence of coarsening steps (all implemented successively on the same L×L grid size), as described above, with a criticalization factor applied to each new P_+ table in the sequence. Since each iteration should involve a growing amount of statistics (together with an enlarged neighborhood size), the amount of work is, again, dominated by the last iteration. Once the fixed point is obtained, the derivation of the eigenvalue λ is as described in Section 4.1.

A note on calculating T_c: By observing the P_+ tables over a few subsequent renormalization transformations for a given temperature, it is easy to determine whether the temperature is super- or sub-critical.
One can therefore trap increasingly narrower intervals around T_c, provided of course that the P_+ tables become increasingly more accurate (more statistics and correspondingly larger neighborhoods) as narrower intervals are reached. At the same time, increasingly higher levels of renormalization (higher levels of blocking, each with its P_+ table) should also be produced. Note, however, that the algorithm needs to return to the lower levels only infrequently, because, to a first approximation around the fixed point, there exists a linear relation between a temperature increment at the finest level and the increments in the P_+ tables at all coarser levels.

4.3 Results for ν

The calculation of λ, as given by Eq. (14), has been extensively tested for varying m-spin neighborhoods, values of the constant C_q, amounts of statistics (in calculating P^m_+) and grid sizes. As follows from Eq. (14), the standard deviation in the calculation of λ is proportional to 1/C_q, taking into account that the statistical errors in the calculations of P^1 and P^2 are not related to each other, so they do not cancel out in their difference q^1. The standard deviation also grows marginally as a function of the neighborhood size, but this may well be due to the imperfection of the current implementation (as mentioned above). Indistinguishable results were obtained for lattices 64², 128² and 256².

Figure 5 shows the resulting λ as a function of the perturbation C_q (in the relevant direction) for the 20-spin neighborhood (consisting of 2826 neighborhoods, where the layer of spins marked by 4 in Figure 1 is considered only via its 9 sums, as explained in Section 2.4) and for the 36-spin neighborhood (consisting of 30000 neighborhoods, where all spins marked by 5, 6 and 7 are taken via their 17 sums). Each result was averaged over more than a hundred iterations (as defined in Section 4.1), so as to guarantee negligible error bars: smaller than .0004 and .0008, respectively. Each of the two R projections involved in each iteration (see Eq. (13)) was calculated over 10^5 MC passes on a 128² grid. The first approximation to the fixed point was obtained from previous steps with less statistics and smaller (8-spin and 12-spin) neighborhoods (see again Section 4.1).

Since the expansion of P in C_q is linear only near the fixed point, it is clear that if C_q is too large, the perturbation away from the fixed point is too strong and certainly falls out of the linear regime. Also, if C_q is too small, the statistical errors, proportional as mentioned to 1/C_q, spoil the calculations. Moreover, we found that even when the amount of statistics grows indefinitely, the results for small C_q fall out of the expected linear dependence on C_q. This is due to the truncation error, and can be explained as follows.

Each neighborhood N_i that has an entry (P_+)_i in our P_+ table can be regarded as the union of "offspring" (e.g., its "children"): N_i = ∪_j N_ij. Each N_ij is a neighborhood coinciding with N_i in its inner layers, and in addition having some specified spin signs in the first layer not included in N_i. It has a frequency w_ij and a certain probability, P_+^ij, of having a positive spin at its center. Clearly

    (P_+)_i = Σ_j w'_ij P_+^ij ,    (16)

where w'_ij = w_ij/Σ_k w_ik. The perturbation C_q q from the fixed point changes each P_+^ij at the next (renormalized) level by C_q(λq_i + ε_ij) + O(C_q²), where ε_ij is small (for large N_i). This contributes C_q(λq_i + Σ_j w'_ij ε_ij) + O(C_q²) to (P^2 - P^1)_i in our algorithm, which has the desired size (although including the small O(ε_ij) error in λ).
However, the perturbation C_q q also changes the weights w'_ij at the renormalized level, thereby adding an undesired contribution to P^2 - P^1, which, by Eq. (16) and the following argument, can be large. For any fixed neighborhood N_i, the changes in w'_ij can mainly be regarded as changes in the expected number of negative (or positive) spins among the spins just outside N_i. This number is proportional to the average energy E = <s_i s_j>, where s_i and s_j are neighboring spins (at the renormalized level). Hence the changes in w'_ij are proportional to the change in E. Since C_q is proportional to a corresponding perturbation δ = T - T_c of the temperature, the changes of w'_ij per unit change of C_q are proportional to the derivative of E with respect to T, which is the heat capacity C_p. It is well known that C_p diverges at T_c; hence, as C_q tends to 0 (vanishing δ), the changes in w'_ij per unit change in C_q become unbounded, thus introducing an unbounded error in λ. This unbounded error results directly from the truncation of neighborhoods; it can be avoided by suitably increasing their size whenever C_q is reduced.

The unbounded truncation error explains the unusual difficulties we have experienced in this particular calculation (computing λ), unlike other RMG calculations. It implies that to achieve higher accuracy in λ one cannot reduce C_q before adequately increasing the neighborhood sizes (as well as the amount of statistics, as mentioned above). Thus, for fixed neighborhood sizes, there is a limited range of C_q values for which the computed approximation to λ behaves linearly in C_q. In Figure 5, the linear regime is clearly shown by the excellent linear fit drawn for the intermediate values of C_q, obtained by comparing to other possible (linear) fits over the data and choosing the one which exhibits the minimal (least-squares per unit length of the C_q interval) error. The resulting estimate for λ is obtained by linear extrapolation to C_q = 0.

[Figure 5: Approximations for λ as a function of the perturbation C_q (in the relevant direction) using P^20_+ and P^36_+. The (linear) extrapolated values for C_q = 0 are given at the intersections with the y-axis.]

For the 20-spin neighborhood we obtained λ ≈ 2.022; for the 36-spin neighborhood the improvement was to λ ≈ 2.009. This improvement is not so impressive because even in the 36-spin neighborhood we still have the outer layer of the 20-spin neighborhood (marked 4 in Figure 1) taken only in terms of sums, which introduces an error not much smaller than that resulting from omitting the next layer (marked 5, 6, 7 in Figure 1). Any of these results can, of course, be improved by increasing the amount of statistics and including more neighborhoods.

5 Coarse-to-Fine Monte Carlo Acceleration

For a given lattice with a given action (possibly in the form of P_+ tables), suppose now that the P_+ tables for all its coarser levels (the level of blocks, the level of blocks of blocks, etc.) are also given. Then a new equilibrium of the given action can be produced accurately and fast using a Monte Carlo coarse-to-fine equilibration (CFE) method, defined as follows. First an equilibrium is easily obtained at the coarsest level, by a few MC passes with the corresponding P_+ table. From this, an equilibrium at the next level is derived, and so on, until the target level (the given lattice) is reached.
To obtain an equilibrium at any level of spins, given an equilibrium of its blocks, we use "stochastic interpolation", i.e., a number of "compatible Monte Carlo" (CMC) passes. By this we mean Monte Carlo passes at the spin level which keep the values of the blocks unchanged (that is, they avoid processing any spin whose flipping might change the block variables). The CMC has a very short autocorrelation time: actually very close to 1 in all our tests. (More generally, for any model: if (and only if) the CMC autocorrelation time is not short, then the definition chosen for the block variables has been inadequate.) So only a few CMC passes are really needed: their number increases only logarithmically with the desired accuracy; just 4 or 5 of them typically already yield fine results, and each additional CMC pass enhances the equilibration by approximately a factor of e (corresponding to the autocorrelation time being close to 1). For example, we compared the results for the two-point correlation function (at distance √2) obtained on a 16×16 lattice employing 4, 6 and 8 CMC passes. The difference between the results with 6 and 8 CMC passes was .0012, while the difference between 4 and 8 was .0092. The ratio between the two differences, about 8.0, is close to e².

If the coarse-level (block) P_+ table is not fully accurate, the CMC passes should be followed by a small number of regular MC passes, a process we call "post-relaxation" (PR), following classical multigrid nomenclature. In fact, following again this nomenclature, the above process can be viewed as a "half-V-cycle", in which only the second, coarse-to-fine part of a multigrid V-cycle is employed. In the case of criticality, the P_+ tables should have been calculated with criticalization, to avoid the drift away from the critical surface, as explained in Section 4.2.

An extremely simple way to obtain a very good approximation to equilibrium at the critical temperature on a given lattice with a critical action is by CFE employing this same action (e.g., the Hamiltonian given by Eq. (1) at T_c) at all levels, with p PR sweeps at each level. We call this process the trivial CFE, or TCFE. The produced configurations are completely decorrelated, as each one is constructed individually, starting from a different small configuration chosen randomly at equilibrium. We have measured 0 autocorrelation time for various observables on lattices of up to 256² sites. The cost of a new independent configuration depends on the employed number of CMC passes and on p, but as explained below it is independent of the lattice size.

It can be shown that the required number p of PR sweeps is small whenever the convergence to a fixed point of the renormalization flow is fast. If the Hamiltonian used is fairly close to the fixed point (i.e., a good approximation to the fixed point is obtained in just a few RG steps), then using it on a particular grid produces nearly the correct equilibrium for block-spins of somewhat coarser levels. Thus, the PR is needed mainly to equilibrate only the smaller, local scales. This is indeed evident in the following numerical results, which exhibit the excellent quality of the equilibria obtained by TCFE. In Table 2 we present the errors measured for the two-point correlation function (at a distance of √2 meshsizes). The errors are calculated by comparing to results obtained from long runs of the Wolff algorithm. The table shows that the errors stay essentially fixed as the lattice grows and decrease rapidly with the number of post-relaxations, independently of the lattice size.
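A sketch (ours, with simplifications) of a CMC pass and of the resulting TCFE half-V-cycle; it reuses `heat_bath_sweep` and `rng` from the Section 1.1 sketch. Ties (block sum 0) are here conservatively treated as incompatible, which is one possible reading of the constraint, and the target size is assumed to be a power-of-two multiple of the coarsest size.

```python
import numpy as np

def cmc_pass(s, blocks, K):
    """One compatible-Monte-Carlo (CMC) pass: a heat-bath pass over the fine
    spins that skips any flip which could change a block's majority sign."""
    L = s.shape[0]
    for i in range(L):
        for j in range(L):
            bi, bj = i // 2, j // 2
            block_sum = s[2*bi:2*bi+2, 2*bj:2*bj+2].sum()
            # After flipping s[i,j] the block sum becomes block_sum - 2*s[i,j];
            # require that its sign still equal the prescribed block spin.
            if np.sign(block_sum - 2 * s[i, j]) != blocks[bi, bj]:
                continue
            h = (s[(i-1) % L, j] + s[(i+1) % L, j] +
                 s[i, (j-1) % L] + s[i, (j+1) % L])
            dH = K * s[i, j] * h
            if rng.random() < 1.0 / (1.0 + np.exp(2.0 * dH)):
                s[i, j] = -s[i, j]
    return s

def tcfe(L_target, K, n_cmc=8, p=4, L_coarsest=4, n_coarsest=20):
    """Trivial CFE (half-V-cycle): equilibrate a tiny coarsest lattice, then
    repeatedly interpolate (CMC) and post-relax (PR) up to L_target, using
    the same critical action at every level."""
    s = rng.choice([-1, 1], size=(L_coarsest, L_coarsest)).astype(int)
    for _ in range(n_coarsest):
        heat_bath_sweep(s, K)                          # coarsest-level equilibrium
    while s.shape[0] < L_target:
        blocks = s
        s = np.repeat(np.repeat(blocks, 2, 0), 2, 1)   # unanimous-block start
        for _ in range(n_cmc):
            cmc_pass(s, blocks, K)                     # stochastic interpolation
        for _ in range(p):
            heat_bath_sweep(s, K)                      # post-relaxation (PR)
    return s
```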
Table 2: The errors in measuring the two-point correlation function (at distance $\sqrt{2}$ meshsizes) by TCFE, with 8 CMC passes and $p$ post-relaxation sweeps, on an $L \times L$ grid.

              p = 0      p = 4      p = 8      p = 16
    L = 16    0.00513    0.00068    0.00044    0.00017
    L = 32    0.00682    0.00139    0.00079    0.00039
    L = 64    0.00772    0.00174    0.00094    0.00049
    L = 128   0.00825    0.00233    0.00100    0.00049
    L = 256   0.00851    0.00215    0.00095    0.00046
    L = 512   0.00869    0.00210    0.00109    0.00059

Similar results were obtained for other observables (e.g., the energy). Also, as shown in Figures 3 and 4, very small errors were measured in $P_+^8$ and $P_+^{12}$ of the first coarse level over an ensemble of configurations produced by the TCFE (with 4 CMC sweeps), starting from a completely random configuration at the coarsest level. The errors were again calculated by comparing the figures with those of the Wolff algorithm. Results are shown for $p = 0, 1, 2$ and 4 PR sweeps for grids of $16^2$ and $32^2$ sites, where the amount of work taken into account includes the 4 CMC sweeps, the $p$ PR passes and an additional one which roughly stands for the work accumulated on all coarser levels of the TCFE. Thus, the four results are drawn versus 5, 6, 7 and 9 MC sweeps, respectively. Note the much accelerated pace of convergence per MC pass brought about by the TCFE. Also observe that measurements for the larger neighborhood (e.g., of 12 spins) are less sensitive to finite-size effects than those for a smaller neighborhood (of 8 spins); hence the former exhibit a more regular behavior than the latter.

Remember that these results refer only to the trivial CFE. They can be improved by using on all coarse levels not the finest-grid critical Hamiltonian $H$ but the Hamiltonian $R(H)$ (in the form of $P_+$ tables). For example, we have calculated $P_+^{12}$ and $P_+^{20}$ of the first coarse level over $4 \times 10^6$ MC sweeps on a $128^2$ lattice at $T_c$. As shown in Table 3, the accuracy in calculating the two-point correlation function is significantly improved (compare with Table 2) by using these $P_+$ tables on all coarse levels (even with $p = 0$ PR sweeps). Still further improvement can presumably be obtained by using $H$ at the finest level, $R(H)$ on the next coarser level and $R^2(H)$ on all other levels; etc., provided each additional projection $R$ involves a proper criticalization. Without such criticalization, on sufficiently coarse levels it is better to use the original Hamiltonian $H$ (if it is known to be critical).

Table 3: The errors in measuring the two-point correlation function (at distance $\sqrt{2}$ meshsizes) by CFE using $P_+^{12}$ and $P_+^{20}$ on all coarse levels, with 8 CMC passes and $p = 0$, on an $L \times L$ grid.

              12-spin         20-spin
              neighborhood    neighborhood
    L = 64    0.00143         0.00076
    L = 128   0.00135         0.00089

6 Extension to Continuous-State Models

Initial steps of applying the above coarsening and acceleration techniques to the XY model are reported in [7]. Each $2 \times 2$ block spin is here defined to be the average of its four constituent spins, without normalization (whereby the original XY group of length-1 vectors is not preserved at the coarse levels).
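As a minimal numpy sketch of this block-spin definition (the function and array names are ours): each coarse spin is the plain average of a $2 \times 2$ cell of unit vectors, so its length (at most 1) is retained as extra information rather than being normalized away.

    import numpy as np

    def xy_block_average(theta):
        # theta: (L, L) array of XY spin angles, L assumed even.
        # Returns the (L/2, L/2, 2) array of unnormalized 2x2 block averages.
        v = np.stack([np.cos(theta), np.sin(theta)], axis=-1)  # unit vectors
        return 0.25 * (v[0::2, 0::2] + v[1::2, 0::2] +
                       v[0::2, 1::2] + v[1::2, 1::2])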
Compared with the $\pm 1$ majority spins discussed above, each coarse spin here contains much more information; as a result, much smaller neighborhoods are needed in the probability tables to attain a given truncation accuracy. Still, these tables are more complicated than the above $P_+$ tables, since they should describe a continuous distribution, conditioned on continuous neighboring values.

To accumulate continuous-variable statistics, one of course partitions the range of this variable into bins: counting the number of MC hits in each bin gives an estimate for the integral of the continuous variable over that bin. From those integrals, the value of the variable at any particular point can be interpolated (by a polynomial each of whose integrals over several adjacent bins fits the estimate). The same is true for a vectorial variable, such as the one representing the entire (truncated) neighborhood, whose bins may each be a tensor product of elementary bins, where each elementary bin is one of the bins of one of the real variables making up the vector. More generally, the bins of the neighborhood are constructed adaptively, similarly to the adaptive neighborhoods in the Ising case above, except that here a bin can be split into several bins in two ways: either by adding another variable to the description of that particular neighborhood, or by refining the current bin partition of one of the existing variables.

The set of tests with the XY model reported in [7], though still limited to the simplest neighborhood, clearly indicates that ideal MC performance is obtained in calculating various thermodynamic limits, such as the two-point correlation and the scaled susceptibility.

After further program improvements (more accurate and automatic implementation of the rules described above) and further testing of optimality for various observables, we plan to extend the RMG techniques to more advanced physical problems, including gauge field models such as U(1), SU(2) and SU(3).

Acknowledgments

We would like to thank R. H. Swendsen for his enlightening remarks. The research has been supported by grants No. G0289-065.07/93 from the German-Israeli Foundation for Research and Development (GIF) and No. 696/97 from the Israel Science Foundation, by AFOSR contract F33615-97-D5405, and by the Carl F. Gauss Minerva Center for Scientific Computation at the Weizmann Institute of Science.

References

[1] R. H. Swendsen and J. S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86-88, 1987.
[2] U. Wolff. Collective Monte Carlo updating for spin systems. Phys. Rev. Lett., 62:361-364, 1989.
[3] A. Brandt, M. Galun, and D. Ron. Optimal multigrid algorithms for calculating thermodynamic limits. J. Stat. Phys., 74:313-348, 1994.
[4] A. Brandt and M. Galun. Optimal multigrid algorithms for the massive Gaussian model and path integrals. J. Stat. Phys., 82:1503-1518, 1996.
[5] A. Brandt and M. Galun. Optimal multigrid algorithms for variable-coupling isotropic Gaussian models. J. Stat. Phys., 88:637-664, 1997.
[6] M. Galun. Statistically Optimal Multigrid Algorithms in Statistical Physics. Ph.D. Thesis, Weizmann Institute of Science, 1998.
[7] S. Shmulyian. Towards Optimal Multigrid Monte Carlo Computations in Two-Dimensional O(N) Non-Linear Sigma-Models. Ph.D. Thesis, Weizmann Institute of Science, 1999.
[8] A. Brandt. Multiscale Scientific Computation: Six-Year Research Summary. Available at the web site http://www.wisdom.weizmann.ac.il/~achi.
[9] D. Bai and A. Brandt. Multiscale Computation of Polymer Models. This Proceedings.
[10] A. Brandt and V. Ilyin. Multiscale Approach in Statistical Physics of Liquids. This Proceedings.
[11] A. Brandt. General Highly Accurate Algebraic Coarsening Schemes. Electronic Trans. Num. Anal., 10:1-20, 2000.
[12] A. Brandt and D. Ron. Statistically Optimal Renormalization Group Flow and Coarse-to-Fine Monte Carlo Acceleration. Gauss Center Report WI/GC-11, Weizmann Institute of Science, 1999.
[13] A. Brandt and D. Ron. Renormalization Multigrid (RMG): Action Perfection and Slow-Down Elimination. Submitted to Phys. Rev. Lett., July 1999.
[14] A. Brandt and D. Ron. Renormalization Multigrid (RMG): Statistically Optimal Renormalization Group Flow and Coarse-to-Fine Monte Carlo Acceleration. Submitted to J. Stat. Phys., July 1999.
[15] W. Janke. Monte Carlo simulations of spin systems. In K. H. Hoffmann and M. Schreiber, editors, Computational Physics, pages 10-43. Springer, Berlin, 1996.
[16] A. Brandt and M. Galun. Statistically Optimal Multigrid Algorithms for the Anharmonic Crystal Model. Gauss Center Report WI/GC-9, Weizmann Institute of Science, 1998.
[17] R. Gupta. Open problems in Monte Carlo renormalization group: Application to critical phenomena. J. Appl. Phys., 61:3605-3611, 1987.
[18] R. H. Swendsen. Monte Carlo Renormalization. In T. W. Burkhardt and J. M. J. van Leeuwen, editors, Real-Space Renormalization, pages 57-86. Springer, Berlin, 1982.
[19] C. F. Baillie, R. Gupta, K. A. Hawick, and G. S. Pawley. Monte Carlo renormalization-group study of the three-dimensional Ising model. Phys. Rev. B, 45:10438-10453, 1992.
[20] A. Brandt. Multilevel computations of integral transforms and particle interactions with oscillatory kernels. Comp. Phys. Comm., 65:24-38, 1991.
[21] B. Sandak and A. Brandt. Multiscale Fast Summation of Long-Range Charge and Dipolar Interactions. This Proceedings.
[22] R. B. Griffiths and P. A. Pearce. Mathematical Properties of Position-Space Renormalization-Group Transformations. J. Stat. Phys., 20:499-545, 1979.
[23] A. C. D. van Enter, R. Fernandez, and A. D. Sokal. Regularity Properties and Pathologies of Position-Space Renormalization-Group Transformations: Scope and Limitations of Gibbsian Theory. J. Stat. Phys., 72:879-1167, 1993.
